Sequential Convex Programming Methods for Solving Nonlinear Optimization Problems with DC constraints
This paper investigates the relation between sequential convex programming
(SCP), as defined, e.g., in [24], and DC (difference of two convex functions)
programming. We first present an SCP algorithm for solving nonlinear
programming. We first present an SCP algorithm for solving nonlinear
optimization problems with DC constraints and prove its convergence. Then we
combine the proposed algorithm with a relaxation technique to handle
inconsistent linearizations. Numerical tests are performed to investigate the
behaviour of the proposed class of algorithms.
Comment: 18 pages, 1 figure
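The SCP step described in this abstract keeps the convex part of each DC constraint $g(x) - h(x) \le 0$ and linearizes the concave part $-h$ at the current iterate, while the relaxation adds a penalized slack so the linearized subproblem remains feasible. Below is a minimal sketch of that scheme on a hypothetical toy problem (the data $c$, $B$, the box bounds, and the penalty weight are illustrative assumptions, not taken from the paper), using CVXPY for the convex subproblems.

```python
import numpy as np
import cvxpy as cp

# Hypothetical DC-constrained toy problem (illustration only):
#   minimize   c^T x
#   subject to g(x) - h(x) <= 0,  -1 <= x <= 1,
# with g(x) = ||x||^2 and h(x) = x^T B x both convex.
c = np.array([1.0, -2.0])
B = np.diag([2.0, 0.5])
rho = 10.0                     # penalty weight for the slack relaxation
x_k = np.array([1.0, 0.0])     # starting point

for _ in range(30):
    x = cp.Variable(2)
    s = cp.Variable(nonneg=True)   # slack absorbs inconsistent linearizations
    # Convexify the constraint by linearizing the concave part -h at x_k:
    #   g(x) - h(x_k) - grad_h(x_k)^T (x - x_k) <= s.
    h_k = float(x_k @ B @ x_k)
    grad_h_k = 2.0 * (B @ x_k)
    cons = [cp.sum_squares(x) - h_k - grad_h_k @ (x - x_k) <= s,
            x >= -1, x <= 1]
    cp.Problem(cp.Minimize(c @ x + rho * s), cons).solve()
    if np.linalg.norm(x.value - x_k) < 1e-8:
        break
    x_k = x.value

print("SCP iterate:", x_k)
```

Because the slack is penalized rather than forbidden, each subproblem is feasible even when the linearized constraint set would otherwise be empty, which is exactly the role the relaxation technique plays here.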
A Primal-Dual Algorithmic Framework for Constrained Convex Minimization
We present a primal-dual algorithmic framework to obtain approximate
solutions to a prototypical constrained convex optimization problem, and
rigorously characterize how common structural assumptions affect the numerical
efficiency. Our main analysis technique provides a fresh perspective on
Nesterov's excessive gap technique in a structured fashion and unifies it with
smoothing and primal-dual methods. For instance, through the choices of a dual
smoothing strategy and a center point, our framework subsumes decomposition
algorithms, augmented Lagrangian methods, and the alternating direction
method of multipliers (ADMM) as special cases, and it provides optimal
convergence rates on both the primal objective residual and the primal
feasibility gap of the iterates for all of these variants.
Comment: This paper consists of 54 pages with 7 tables and 12 figures
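As one concrete special case that such a framework covers, the sketch below runs a plain augmented Lagrangian method on the hypothetical problem $\min \tfrac{1}{2}\|x\|^2$ subject to $Ax = b$ and reports the primal feasibility gap; the random data, penalty parameter, and iteration budget are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Augmented Lagrangian on a toy equality-constrained problem:
#   minimize 0.5*||x||^2  subject to  A x = b.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)
rho = 1.0               # penalty parameter
y = np.zeros(3)         # dual variable (multiplier)

for k in range(200):
    # Primal step: minimize the augmented Lagrangian in x
    # (closed form here because the objective is quadratic):
    #   (I + rho A^T A) x = A^T (rho b - y).
    x = np.linalg.solve(np.eye(6) + rho * A.T @ A, A.T @ (rho * b - y))
    # Dual ascent step on the feasibility residual A x - b.
    y += rho * (A @ x - b)

print("primal feasibility gap:", np.linalg.norm(A @ x - b))
# Compare with the known least-norm solution A^T (A A^T)^{-1} b.
x_star = A.T @ np.linalg.solve(A @ A.T, b)
print("distance to optimum:", np.linalg.norm(x - x_star))
```

The two printed quantities mirror the two criteria the abstract tracks: the feasibility gap of the iterates and their progress toward the primal optimum.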
Randomized Block-Coordinate Optimistic Gradient Algorithms for Root-Finding Problems
In this paper, we develop two new randomized block-coordinate optimistic
gradient algorithms to approximate a solution of nonlinear equations in
large-scale settings, which are called root-finding problems. Our first
algorithm is non-accelerated with constant stepsizes, and achieves an
$\mathcal{O}(1/k)$ best-iterate convergence rate on $\mathbb{E}[\|Gx^k\|^2]$
when the underlying operator $G$ is Lipschitz continuous and satisfies a weak
Minty solution condition, where $\mathbb{E}[\cdot]$ is the expectation and $k$
is the iteration counter. Our second method is a new accelerated randomized
block-coordinate optimistic gradient algorithm. We establish both
$\mathcal{O}(1/k^2)$ and $o(1/k^2)$ last-iterate convergence rates on both
$\mathbb{E}[\|Gx^k\|^2]$ and $\mathbb{E}[\|x^{k+1} - x^k\|^2]$ for this
algorithm under the co-coerciveness of $G$. In addition, we prove that the
iterate sequence $\{x^k\}$ converges to a solution almost surely, and
$\|Gx^k\|^2$ attains a $o(1/k^2)$ almost sure convergence rate. Then, we
apply our methods to a class of large-scale
finite-sum inclusions, which covers prominent applications in machine learning,
statistical learning, and network optimization, especially in federated
learning. We obtain two new federated learning-type algorithms and their
convergence rate guarantees for solving this problem class.
Comment: 30 pages
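To make the update rule concrete, here is a hedged sketch of a randomized block-coordinate optimistic (past-extragradient) step for a root-finding problem $G(x) = 0$. The toy linear operator, block partition, and stepsize below are illustrative assumptions, not the paper's exact scheme, and a real large-scale instance would evaluate only the sampled block of $G$ rather than the full operator.

```python
import numpy as np

# Toy root-finding problem G(x) = 0 with a co-coercive linear operator.
rng = np.random.default_rng(1)
p, n_blocks = 8, 4
M = rng.standard_normal((p, p))
M = M @ M.T + np.eye(p)          # symmetric positive definite => monotone G

def G(x):
    return M @ x                 # unique root: x* = 0

blocks = np.array_split(np.arange(p), n_blocks)
eta = 0.5 / np.linalg.norm(M, 2)     # constant stepsize (assumed choice)
x = rng.standard_normal(p)
g_prev = G(x)

for k in range(2000):
    i = rng.integers(n_blocks)       # sample one block uniformly at random
    g = G(x)
    # Optimistic (extrapolated) gradient step on the chosen block only:
    #   x_i <- x_i - eta * (2*G_i(x^k) - G_i(x^{k-1})).
    idx = blocks[i]
    x[idx] -= eta * (2.0 * g[idx] - g_prev[idx])
    g_prev = g

print("residual ||G(x)||:", np.linalg.norm(G(x)))
```

The extrapolation term $2G(x^k) - G(x^{k-1})$ is what distinguishes the optimistic update from a plain forward step, and the random block index is what keeps the per-iteration cost low in large-scale settings.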